IBM Advanced Datascience Capstone Project - Soil Moisture Prediction

Objective - Predict soil moisture from past observations, which include ground and air temperature, relative humidity, and wind speed.

In [1]:
import numpy as np
import pandas as pd
import io
import requests
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
Waiting for a Spark session to start...
Spark Initialization Done! ApplicationId = app-20190519201252-0001
KERNEL_ID = d4e5d8a8-ca9f-4f74-8998-31033bfb905f

Data Source - Data was obtained from the data library of the United States Department of Agriculture (USDA)

• The dataset includes hourly hydro-meteorological variables, including soil moisture, air temperature, and relative humidity, from 11 sites in the Reynolds Creek watershed in southwestern Idaho

Two Datasets were retrieved.

  1. Weather Dataset
  2. Soil Moisture Dataset
In [2]:
weatherurl="https://data.nal.usda.gov/system/files/weather_data_jdt2b.csv"
s=requests.get(weatherurl).content
weatherData=pd.read_csv(io.StringIO(s.decode('utf-8')))
In [3]:
weatherData.head()
Out[3]:
Date_time WY Year Month Day Hour Minute T_a RH e_a T_d w_s w_d
0 11/5/2005 0:00 2006 2005 11 5 0 0 -9999.0 -9999.0 -9999 -9999.0 -9999.0 -9999.0
1 11/5/2005 1:00 2006 2005 11 5 1 0 -9999.0 -9999.0 -9999 -9999.0 -9999.0 -9999.0
2 11/5/2005 2:00 2006 2005 11 5 2 0 -9999.0 -9999.0 -9999 -9999.0 -9999.0 -9999.0
3 11/5/2005 3:00 2006 2005 11 5 3 0 -9999.0 -9999.0 -9999 -9999.0 -9999.0 -9999.0
4 11/5/2005 4:00 2006 2005 11 5 4 0 -9999.0 -9999.0 -9999 -9999.0 -9999.0 -9999.0
In [4]:
soilmoistureurl="https://data.nal.usda.gov/system/files/rc.tg_.dc_.jd-jdt2b_stm_0.csv"
s=requests.get(soilmoistureurl).content
soilmoistureData=pd.read_csv(io.StringIO(s.decode('utf-8')))
In [5]:
soilmoistureData.head()
Out[5]:
Date_time WY Year Month Day Hour Minute T_g_5 T_g_20 T_g_35 T_g_50 T_g_75 s_m_5 s_m_20 s_m_35 s_m_50 s_m_75
0 10/1/2010 0:00 2011 2010 10 1 0 0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0
1 10/1/2010 1:00 2011 2010 10 1 1 0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0
2 10/1/2010 2:00 2011 2010 10 1 2 0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0
3 10/1/2010 3:00 2011 2010 10 1 3 0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0
4 10/1/2010 4:00 2011 2010 10 1 4 0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0 -9999.0
In [6]:
# Data source documentation indicates missing values are denoted by -9999
# Let's remove those rows
weatherClean = weatherData[(weatherData[['T_a','RH','e_a','T_d','w_s','w_d']] != -9999.0).all(axis=1)]
In [7]:
# Data source documentation indicates missing values are denoted by -9999
# Let's remove those rows
soilMoistureClean = soilmoistureData[(soilmoistureData[['T_g_5','T_g_20','T_g_35','T_g_50','T_g_75','s_m_5','s_m_20','s_m_35','s_m_50','s_m_75']] != -9999.0).all(axis=1)]
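An equivalent approach, sketched here on a toy frame rather than the USDA data, is to convert the sentinel values to NaN first and then drop the affected rows; this also generalizes to datasets where different columns use different sentinels:

```python
import numpy as np
import pandas as pd

def drop_sentinel_rows(df, columns, sentinel=-9999.0):
    """Replace the sentinel with NaN in the given columns, then drop
    any row that has a NaN in those columns."""
    cleaned = df.copy()
    cleaned[columns] = cleaned[columns].replace(sentinel, np.nan)
    return cleaned.dropna(subset=columns)

# Tiny illustrative frame (not the actual weather data)
demo = pd.DataFrame({'T_a': [2.8, -9999.0, 3.4],
                     'RH':  [0.61, 0.55, -9999.0]})
print(drop_sentinel_rows(demo, ['T_a', 'RH']))  # only the first row survives
```

Going through NaN also lets `describe()` and plotting ignore the missing entries automatically if you choose to keep the rows.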
In [8]:
# Merge the two datasets on the common Date_time column
soilWeatherData = soilMoistureClean.merge(weatherClean,how="inner", on="Date_time")
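Since an inner join silently drops timestamps that appear in only one dataset, it can be worth auditing the join with pandas' `indicator=True` option. A sketch on toy frames (not the actual cleaned data):

```python
import pandas as pd

# Toy frames standing in for the cleaned soil-moisture and weather data
left = pd.DataFrame({'Date_time': ['3/4/2011 11:00', '3/4/2011 12:00'],
                     's_m_35': [0.311, 0.312]})
right = pd.DataFrame({'Date_time': ['3/4/2011 12:00', '3/4/2011 13:00'],
                      'T_a': [3.4, 3.9]})

# An outer join with indicator=True labels each row's provenance,
# showing exactly which timestamps an inner join would discard.
audit = left.merge(right, how='outer', on='Date_time', indicator=True)
print(audit[['Date_time', '_merge']])
```

Rows labeled `left_only` or `right_only` are the ones the inner join above removes.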
In [9]:
soilWeatherData.head()
Out[9]:
Date_time WY Year_x Month_x Day_x Hour_x Minute_x T_g_5 T_g_20 T_g_35 ... Month_y Day_y Hour_y Minute_y T_a RH e_a T_d w_s w_d
0 3/4/2011 11:00 2011 2011 3 4 11 0 0.3 1.6 2.1 ... 3 4 11 0 2.8 0.61 456 -3.5 3.1 169.4
1 3/4/2011 12:00 2011 2011 3 4 12 0 1.1 1.6 2.1 ... 3 4 12 0 3.4 0.55 429 -4.2 3.2 190.3
2 3/4/2011 13:00 2011 2011 3 4 13 0 2.4 1.6 2.1 ... 3 4 13 0 3.9 0.52 420 -4.5 3.1 130.2
3 3/4/2011 14:00 2011 2011 3 4 14 0 3.3 1.7 2.0 ... 3 4 14 0 3.6 0.56 443 -3.8 4.0 245.8
4 3/4/2011 15:00 2011 2011 3 4 15 0 4.1 1.9 2.0 ... 3 4 15 0 4.8 0.54 465 -3.3 2.7 135.5

5 rows × 29 columns

In [10]:
# Inspect dtypes before dropping unneeded and duplicate columns
soilWeatherData.dtypes
Out[10]:
Date_time     object
WY             int64
Year_x         int64
Month_x        int64
Day_x          int64
Hour_x         int64
Minute_x       int64
T_g_5        float64
T_g_20       float64
T_g_35       float64
T_g_50       float64
T_g_75       float64
s_m_5        float64
s_m_20       float64
s_m_35       float64
s_m_50       float64
s_m_75       float64
 WY            int64
Year_y         int64
Month_y        int64
Day_y          int64
Hour_y         int64
Minute_y       int64
T_a          float64
RH           float64
e_a            int64
T_d          float64
w_s          float64
w_d          float64
dtype: object
In [11]:
# The date/time components and duplicate columns are not required; keep only the measurement columns

soilWeatherConcise = soilWeatherData.filter(['T_g_5','T_g_20','T_g_35','T_g_50','T_g_75','s_m_5','s_m_20','s_m_35','s_m_50','s_m_75','T_a','RH','e_a','T_d','w_s','w_d'],axis=1)
In [12]:
soilWeatherConcise.head()
Out[12]:
T_g_5 T_g_20 T_g_35 T_g_50 T_g_75 s_m_5 s_m_20 s_m_35 s_m_50 s_m_75 T_a RH e_a T_d w_s w_d
0 0.3 1.6 2.1 2.3 2.9 0.241 0.340 0.311 0.319 0.289 2.8 0.61 456 -3.5 3.1 169.4
1 1.1 1.6 2.1 2.3 2.9 0.243 0.340 0.312 0.322 0.288 3.4 0.55 429 -4.2 3.2 190.3
2 2.4 1.6 2.1 2.3 2.9 0.246 0.339 0.312 0.320 0.288 3.9 0.52 420 -4.5 3.1 130.2
3 3.3 1.7 2.0 2.3 2.9 0.247 0.337 0.312 0.323 0.288 3.6 0.56 443 -3.8 4.0 245.8
4 4.1 1.9 2.0 2.3 2.9 0.244 0.340 0.312 0.323 0.288 4.8 0.54 465 -3.3 2.7 135.5
In [13]:
soilWeatherConcise.shape
Out[13]:
(27093, 16)

There are a little over 27,000 observations with 16 attributes.

In [14]:
soilWeatherConcise.describe()
Out[14]:
T_g_5 T_g_20 T_g_35 T_g_50 T_g_75 s_m_5 s_m_20 s_m_35 s_m_50 s_m_75 T_a RH e_a T_d w_s w_d
count 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000 27093.000000
mean 12.653412 12.525926 12.328476 12.058214 11.765740 0.134306 0.225555 0.210865 0.217434 0.204170 9.381146 0.507501 569.343078 -1.765146 2.778345 210.770749
std 10.120863 8.716143 8.141291 7.606521 6.742225 0.064648 0.072174 0.069939 0.069741 0.071845 9.801322 0.225752 249.478514 5.751526 1.733648 91.465950
min -2.000000 0.200000 0.600000 1.300000 2.200000 0.038000 0.132000 0.120000 0.126000 0.121000 -16.100000 0.060000 55.000000 -26.400000 0.400000 0.000000
25% 2.900000 3.900000 4.200000 4.300000 5.000000 0.073000 0.163000 0.151000 0.160000 0.144000 1.900000 0.310000 381.000000 -5.600000 1.700000 153.700000
50% 11.500000 12.000000 11.900000 11.600000 11.200000 0.121000 0.184000 0.175000 0.180000 0.162000 8.100000 0.490000 542.000000 -1.400000 2.300000 226.200000
75% 21.100000 21.100000 20.500000 19.500000 18.000000 0.193000 0.301000 0.292000 0.292000 0.291000 17.400000 0.690000 715.000000 2.200000 3.200000 288.600000
max 38.200000 29.200000 26.900000 25.100000 23.500000 0.313000 0.382000 0.344000 0.352000 0.336000 36.200000 1.000000 1825.000000 16.100000 14.300000 348.800000
In [15]:
#Data Visualization

Pairs plots help explore distributions and relationships between dependent and independent variables, and the Seaborn package makes them easy to produce. A pairs plot gives a comprehensive first look at our dataset and is a great starting point for the analysis in this soil moisture prediction project.

In [16]:
# Seaborn visualization library
import seaborn as sns
# Create the default pairplot
sns.pairplot(soilWeatherConcise)
Out[16]:
<seaborn.axisgrid.PairGrid at 0x7f6e1ace47b8>

Let's examine the correlation between the variables with a heat map.

In [17]:
corr = soilWeatherConcise.corr()
#Plot figsize
fig, ax = plt.subplots(figsize=(10, 10))
#Generate Color Map
colormap = sns.diverging_palette(220, 10, as_cmap=True)
#Generate Heat Map, allow annotations and place floats in map
sns.heatmap(corr, cmap=colormap, annot=True, fmt=".2f")
#Apply xticks
plt.xticks(range(len(corr.columns)), corr.columns);
#Apply yticks
plt.yticks(range(len(corr.columns)), corr.columns)
#show plot
plt.show()

Ground temperature and soil moisture have a strong correlation, especially at 20 and 35 cm depth.
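To read the heat map numerically, the correlations with the target can also be ranked directly. A small sketch on a toy frame (the real notebook would pass `soilWeatherConcise` and `'s_m_35'`):

```python
import pandas as pd

def top_correlations(df, target, n=3):
    """Absolute correlation of every other column with the target,
    sorted strongest first."""
    corr = df.corr()[target].drop(target)
    return corr.abs().sort_values(ascending=False).head(n)

# Tiny illustrative frame (not the actual merged data)
demo = pd.DataFrame({'s_m_35': [0.31, 0.32, 0.34, 0.30],
                     'T_g_20': [1.6, 1.6, 1.7, 1.5],
                     'w_s':    [3.1, 3.2, 4.0, 2.7]})
print(top_correlations(demo, 's_m_35'))
```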

In [18]:
# pop removes the target column in place, so X no longer contains s_m_35
y = soilWeatherConcise.pop('s_m_35')
X = soilWeatherConcise
In [19]:
#split the data into train and test
from sklearn.model_selection import train_test_split
In [20]:
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=0)
In [21]:
# Normalize data: fit the scaler on the training set only, then apply it to the test set
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
X_train= min_max_scaler.fit_transform(X_train)
X_test = min_max_scaler.transform(X_test)
/opt/ibm/conda/miniconda36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by MinMaxScaler.
  return self.partial_fit(X, y)
In [22]:
def applyModel(regressor):
    regressor.fit(X_train,y_train)
    y_pred=regressor.predict(X_test)
    print(regressor.score(X_test,y_test))
    return y_pred
In [23]:
def printMetrics(y_pred):
    from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
    from math import sqrt
    print("R2 score is", r2_score(y_test, y_pred))
    y_pred_pd = pd.DataFrame(y_pred, columns=['s_m_35'])
    # Calculate individual metrics (avoid shadowing the imported function names)
    mse = mean_squared_error(y_test, y_pred_pd)
    mae = mean_absolute_error(y_test, y_pred_pd)
    print("Root Mean Squared Error = ", sqrt(mse))
    print("Mean Absolute Error = ", mae)
In [24]:
def plotResults(y_pred_pd):
    fig, ax = plt.subplots()
    ax.scatter(y_test, y_pred_pd)
    ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
    ax.set_xlabel('Measured Soil Moisture(%)')
    ax.set_ylabel('Predicted Soil Moisture(%)')
    ax.set_title("Soil Moisture at 35cm from Ground level")
    plt.show()
In [25]:
from sklearn.svm import SVR
print("SVM Regressor with Kernel set to linear")
# regressor with kernel set to linear (degree is ignored by non-poly kernels)
regressor=SVR(kernel='linear',degree=3)
y_pred_Lin1 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_Lin1,columns=['s_m_35'])
printMetrics(y_pred_Lin1)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to linear
0.12938613334794624
R2 score is 0.12938613334794624
Root Mean Squared Error =  0.06538535455154174
Mean Absolute Error =  0.06267804132331506
In [26]:
print("SVM Regressor with Kernel set to RBF and epsilon set to 0.1")
# regressor with kernel set to rbf
regressor=SVR(kernel='rbf',epsilon=0.1)
y_pred_RBF1 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_RBF1,columns=['s_m_35'])
printMetrics(y_pred_RBF1)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to RBF and epsilon set to 0.1
0.0801407416104023
R2 score is 0.0801407416104023
Root Mean Squared Error =  0.06720914722503107
Mean Absolute Error =  0.06446338216617296
/opt/ibm/conda/miniconda36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
  "avoid this warning.", FutureWarning)
In [27]:
# regressor with kernel set to poly and epsilon = 0.1
print("SVM Regressor with Kernel set to poly and epsilon set to 0.1")
regressor=SVR(kernel='poly',epsilon=0.1)
y_pred_Poly1 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_Poly1,columns=['s_m_35'])
printMetrics(y_pred_Poly1)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to poly and epsilon set to 0.1
0.07056323227305761
R2 score is 0.07056323227305761
Root Mean Squared Error =  0.06755812961867491
Mean Absolute Error =  0.06454052383468614
/opt/ibm/conda/miniconda36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
  "avoid this warning.", FutureWarning)
In [28]:
from sklearn.svm import SVR
print("SVM Regressor with Kernel set to linear and epsilon set to 0.01")
# regressor with kernel set to linear
regressor=SVR(kernel='linear',degree=3, epsilon=0.01)
y_pred_Lin2 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_Lin2,columns=['s_m_35'])
printMetrics(y_pred_Lin2)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to linear and epsilon set to 0.01
0.9511259419035354
R2 score is 0.9511259419035354
Root Mean Squared Error =  0.015491978777319913
Mean Absolute Error =  0.00892504332713058
In [29]:
# regressor with kernel set to rbf and epsilon = 0.01
print("SVM Regressor with Kernel set to RBF and epsilon set to 0.01")
regressor=SVR(kernel='rbf',epsilon=0.01)
y_pred_RBF2 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_RBF2,columns=['s_m_35'])
printMetrics(y_pred_RBF2)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to RBF and epsilon set to 0.01
/opt/ibm/conda/miniconda36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
  "avoid this warning.", FutureWarning)
0.9679058131755272
R2 score is 0.9679058131755272
Root Mean Squared Error =  0.01255396406533813
Mean Absolute Error =  0.0062827879994049795
In [30]:
# regressor with kernel set to poly and epsilon = 0.01
print("SVM Regressor with Kernel set to POLY and epsilon set to 0.01")
regressor=SVR(kernel='poly',epsilon=0.01)
y_pred_poly2 = applyModel(regressor)
y_pred_pd = pd.DataFrame(y_pred_poly2,columns=['s_m_35'])
printMetrics(y_pred_poly2)
plotResults(y_pred_pd)
SVM Regressor with Kernel set to POLY and epsilon set to 0.01
/opt/ibm/conda/miniconda36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
  "avoid this warning.", FutureWarning)
0.9399930534376719
R2 score is 0.9399930534376719
Root Mean Squared Error =  0.017165974268269192
Mean Absolute Error =  0.010515358599804944
In [31]:
# Let's try a neural network
from keras.models import Sequential
from keras.layers import Dense
Using TensorFlow backend.
In [32]:
# create model
model = Sequential()
model.add(Dense(64, input_dim=15, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
# Compile model
batch_size = 100
model.compile(loss='mse', optimizer='adam')
#train model
model.fit(X_train,y_train,epochs=100,batch_size=batch_size)
# make a prediction
ypredNN = model.predict(X_test)
WARNING:tensorflow:From /opt/ibm/conda/miniconda36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /opt/ibm/conda/miniconda36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/100
21674/21674 [==============================] - 6s 258us/step - loss: 0.0029
Epoch 2/100
21674/21674 [==============================] - 5s 217us/step - loss: 1.7478e-04
Epoch 3/100
21674/21674 [==============================] - 6s 254us/step - loss: 1.2671e-04
...
Epoch 99/100
21674/21674 [==============================] - 5s 222us/step - loss: 1.4568e-05
Epoch 100/100
21674/21674 [==============================] - 8s 355us/step - loss: 1.3045e-05
In [33]:
y_pred_pd = pd.DataFrame(ypredNN,columns=['s_m_35'])
printMetrics(ypredNN)
plotResults(y_pred_pd)
R2 score is 0.9969032120180933
Root Mean Squared Error =  0.003899630239487445
Mean Absolute Error =  0.0024287969682418586

Conclusion - The neural network performs better than Support Vector Regression on this dataset. With further tuning of SVR hyperparameters, it may well perform on par with the neural network for this regression task.
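One way to explore that tuning is a cross-validated grid search over C, epsilon, and the kernel. The sketch below uses an illustrative (not exhaustive) grid and synthetic data in place of the notebook's scaled training set:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Candidate hyperparameters; this grid is illustrative, not exhaustive.
param_grid = {'C': [0.1, 1, 10],
              'epsilon': [0.1, 0.01, 0.001],
              'kernel': ['linear', 'rbf']}

# Synthetic stand-in for the scaled training data used above.
X_demo, y_demo = make_regression(n_samples=200, n_features=15,
                                 noise=0.1, random_state=0)

# 3-fold cross-validation, scored by R2 to match the metrics above.
search = GridSearchCV(SVR(gamma='scale'), param_grid, cv=3,
                      scoring='r2', n_jobs=-1)
search.fit(X_demo, y_demo)
print(search.best_params_, round(search.best_score_, 3))
```

On the real data, `X_train`/`y_train` would replace the synthetic arrays, and `search.best_estimator_` could then be evaluated with the same `printMetrics` helper.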
